OpenGL's GLX specification requires implementations to support various types of ancillary buffers (such as depth, stencil, and accumulation buffers). When there is no hardware support for these buffers, OpenGL implementations are expected to provide them in software by allocating host memory.
GLX requires that the contents of ancillary buffers be shared among renderers bound to a window, and that those contents be retained even when no renderer is bound to the window. For hardware buffers, these requirements are typically straightforward to meet since the ancillary buffers exist in the hardware frame buffer.
OpenGL indirect rendering could easily allow ancillary buffers to be shared between renderers since all the buffers would exist in the X server's address space and the X server has immediate knowledge of the changing state of windows.
Combining direct rendering with retained, shared software ancillary buffers is difficult to achieve without compromising performance. The SGI direct rendering OpenGL implementation does not currently support the correct sharing of ancillary buffers between renderers in different address spaces. Each OpenGL library instance allocates software ancillary buffers for its own address space. These buffers can be shared between renderers in the same address space. Also, the contents of these buffers are retained only for the lifetime of the address space.
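A minimal sketch of this per-address-space approach follows, assuming a simple linked list of software buffers keyed by window ID. All names here (SWAncillary, swList, swAncillaryBind) are hypothetical and only illustrate the sharing pattern described above, not SGI's actual data structures.

    /* Hypothetical sketch (names are illustrative, not SGI's): software
     * ancillary buffers allocated per address space and shared by all
     * direct rendering contexts in this process that bind to the same
     * window.  The buffers live only as long as the process does. */
    #include <stdlib.h>
    #include <X11/Xlib.h>

    typedef struct SWAncillary {
        Window              win;       /* X window these buffers back      */
        int                 width, height;
        unsigned int       *depth;     /* software depth buffer            */
        unsigned char      *stencil;   /* software stencil buffer          */
        float              *accum;     /* software accumulation buffer     */
        int                 refcount;  /* contexts in this process bound   */
        struct SWAncillary *next;
    } SWAncillary;

    static SWAncillary *swList;        /* process-wide, i.e. one address space */

    /* Find or allocate the software ancillary buffers for a window;
     * contexts in the same address space share the same SWAncillary. */
    SWAncillary *swAncillaryBind(Window win, int width, int height)
    {
        SWAncillary *a;

        for (a = swList; a != NULL; a = a->next) {
            if (a->win == win) {
                a->refcount++;
                return a;
            }
        }
        a = (SWAncillary *) calloc(1, sizeof(SWAncillary));
        a->win = win;
        a->width = width;
        a->height = height;
        a->depth   = (unsigned int *)  calloc(width * height, sizeof(unsigned int));
        a->stencil = (unsigned char *) calloc(width * height, sizeof(unsigned char));
        a->accum   = (float *)         calloc(width * height, 4 * sizeof(float));
        a->refcount = 1;
        a->next = swList;
        swList = a;
        return a;
    }

Because swList is a per-process variable, nothing in this scheme lets a renderer in another address space see the same buffer contents, which is exactly the limitation described above.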
This incomplete support for software buffers is, strictly speaking, a violation of what OpenGL requires, but few programs rely on sharing buffers across address spaces. Software buffer sharing is an area where SGI's OpenGL implementation does not properly isolate window state from rendering state. Further work is needed to support ancillary buffer sharing correctly.
One problem that must be solved is the deallocation of software ancillary buffers when windows are destroyed. The OpenGL library has no obvious way to learn that an X window for which it maintains software ancillary buffers has been destroyed, and therefore no cue to deallocate those buffers. Since the buffers tend to be quite large, leaking them is extremely expensive.
SGI solves the problem with a private X server extension request that asks the server to generate a SpecialDestroyNotify event when a specified X window is destroyed. The first time an OpenGL rendering context is bound to a window, this request is made for that window. Hooks in the X extension library (libXext) allow SpecialDestroyNotify events to trigger a callback into the OpenGL library that deallocates the associated software buffers. The event is never seen by the X program. Other mechanisms, such as using the standard X DestroyNotify event, proved unreliable since the X client might not be selecting for that event.
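Continuing the sketch above, the cleanup path might look like the following. The request function and the callback registration are stand-ins for SGI's private protocol extension and the libXext hooks, which are not public; only the callback-driven deallocation pattern is the point.

    /* Hypothetical continuation of the sketch above (uses SWAncillary and
     * swList from the earlier listing).  The request below stands in for
     * SGI's private extension protocol. */
    extern void glxRequestSpecialDestroyNotify(Display *dpy, Window win);  /* hypothetical */

    /* Invoked (via the libXext hooks) when the server reports that `win'
     * has been destroyed; the event is never delivered to the X client. */
    static void swAncillaryDestroyCallback(Display *dpy, Window win)
    {
        SWAncillary **prev, *a;

        (void) dpy;                     /* unused in this sketch */
        for (prev = &swList; (a = *prev) != NULL; prev = &a->next) {
            if (a->win == win) {
                *prev = a->next;        /* unlink from the process-wide list */
                free(a->depth);
                free(a->stencil);
                free(a->accum);
                free(a);
                return;
            }
        }
    }

    /* Called the first time a rendering context is made current to `win':
     * ask the server to send the private destroy notification for it. */
    static void swAncillaryWatchWindow(Display *dpy, Window win)
    {
        glxRequestSpecialDestroyNotify(dpy, win);   /* hypothetical request */
    }

Because the notification is generated by the server itself, the cleanup does not depend on the client application having selected for any particular event on the window.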